
Keyword Search Result

[Keyword] learning algorithm (27 hits)

Results 21-27 of 27

  • Iterative Middle Mapping Learning Algorithm for Cellular Neural Networks

    Chen HE  Akio USHIDA  

     
    PAPER-Neural Networks
    Vol: E77-A No:4, Page(s): 706-715

    In this paper, a middle-mapping learning algorithm for cellular associative memories is presented. This algorithm makes full use of the properties of the cellular neural network, so the resulting associative memory has several advantages over a memory designed by the outer product method. It guarantees that each prototype is stored at an equilibrium point. In a practical implementation, the circuit is easy to build because the weight matrix representing the connections between cells is not symmetric. The synchronous updating rule makes its association speed very fast compared with the Hopfield associative memory.
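
    The middle-mapping design itself is specific to the paper, but the synchronous recall loop that the abstract contrasts with the Hopfield memory can be sketched as follows. This is a minimal illustration, assuming a bipolar state vector and a given weight matrix and bias (which the paper's design procedure would supply); all names are illustrative.

      import numpy as np

      def recall(W, b, probe, max_iters=50):
          """Synchronous recall in a bipolar associative memory.

          W     : (n, n) weight matrix (need not be symmetric)
          b     : (n,) bias vector
          probe : (n,) initial state with entries in {-1, +1}
          """
          s = probe.copy()
          for _ in range(max_iters):
              # every cell updates at once (synchronous), unlike the
              # asynchronous updates of a classical Hopfield memory
              s_next = np.sign(W @ s + b)
              s_next[s_next == 0] = 1          # break ties deterministically
              if np.array_equal(s_next, s):    # reached an equilibrium point
                  return s
              s = s_next
          return s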

  • Invariant Object Recognition by Artificial Neural Network Using Fahlman and Lebiere's Learning Algorithm

    Kazuki ITO  Masanori HAMAMOTO  Joarder KAMRUZZAMAN  Yukio KUMAGAI  

     
    LETTER-Neural Networks
    Vol: E76-A No:7, Page(s): 1267-1272

    A new neural network system for object recognition is proposed that is invariant to translation, scaling, and rotation. The system consists of two parts. The first is a preprocessor that computes projections of the input image plane such that the projection features are translation- and scale-invariant, and then applies the Rapid Transform, which makes the transformed outputs rotation-invariant. The second part is a neural net classifier that receives the outputs of the preprocessing part as its input signals. The most attractive feature of this system is that, by using only a simple shift-invariant transformation (the Rapid Transform) in conjunction with the projection of the input image plane, invariance is achieved and the system is of reasonably small size. Experiments with six geometrical objects at different degrees of scaling and rotation show that the proposed system performs excellently when the neural net classifier is trained by the Cascade-Correlation learning algorithm proposed by Fahlman and Lebiere.
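
    The Rapid Transform named above has a Walsh-Hadamard-like butterfly structure in which each stage keeps the pairwise sums and the absolute pairwise differences; the absolute value is what makes the result invariant to cyclic shifts of the input. A minimal sketch, following one common formulation (the input length must be a power of two):

      import numpy as np

      def rapid_transform(x):
          """Shift-invariant Rapid Transform of a length-2**n sequence."""
          y = np.asarray(x, dtype=float).copy()
          n = y.size
          assert n & (n - 1) == 0, "length must be a power of two"
          span = n
          while span > 1:
              half = span // 2
              for start in range(0, n, span):
                  a = y[start:start + half].copy()
                  b = y[start + half:start + span].copy()
                  y[start:start + half] = a + b                 # pairwise sums
                  y[start + half:start + span] = np.abs(a - b)  # |differences|
              span = half
          return y

      # cyclic shifts of the input leave the transform unchanged
      v = np.array([1., 2., 3., 4., 0., 0., 0., 0.])
      assert np.allclose(rapid_transform(v), rapid_transform(np.roll(v, 3)))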

  • Incremental Learning and Generalization Ability of Artificial Neural Network Trained by Fahlman and Lebiere's Learning Algorithm

    Masanori HAMAMOTO  Joarder KAMRUZZAMAN  Yukio KUMAGAI  Hiromitsu HIKITA  

     
    LETTER-Neural Networks
    Vol: E76-A No:2, Page(s): 242-247

    We apply Fahlman and Lebiere's (FL) algorithm to network synthesis and incremental learning, making use of already-trained networks, each performing a specified task, to design a system that performs a global or extended task without destroying the information gained by the previously trained nets. Investigation shows that the synthesized or expanded FL networks have generalization ability superior to that of backpropagation (BP) networks, in which the number of newly added hidden units must be pre-specified.
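
    A hedged sketch of the synthesis idea: the already-trained networks are frozen and treated as feature extractors, and only the newly added weights are trained on the extended task, so nothing the earlier nets learned is destroyed. The two-stage setup and all names below are illustrative, not taken from the letter.

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def synthesize(frozen_nets, X, y, epochs=2000, lr=0.1):
          """Train a new output unit on top of frozen, pre-trained nets.

          frozen_nets : list of callables, each mapping an input batch
                        (m, d) to its own output features (m, k_i); their
                        internal weights are never touched.
          """
          H = np.hstack([net(X) for net in frozen_nets])   # frozen features
          H = np.hstack([H, np.ones((len(X), 1))])         # bias column
          w = np.zeros(H.shape[1])
          for _ in range(epochs):
              p = sigmoid(H @ w)
              w += lr * H.T @ (y - p) / len(y)   # only the new weights move
          return w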

  • An Adaptive Fuzzy Network

    Zheng TANG  Okihiko ISHIZUKA  Hiroki MATSUMOTO  

     
    LETTER-Fuzzy Theory
    Vol: E75-A No:12, Page(s): 1826-1828

    An adaptive fuzzy network (AFN) is described that can implement most fuzzy logic functions. We introduce a learning algorithm largely borrowed from the backpropagation algorithm and train the AFN on several typical fuzzy problems. Simulations show that an adaptive fuzzy system can be implemented with the proposed network and algorithm, which would be impractical for a conventional fuzzy system.
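
    The AFN architecture itself is not reproduced here, but the core idea the letter describes, making fuzzy operations differentiable so a backpropagation-style rule can tune them, can be sketched with a soft-min standing in for fuzzy AND. The smoothing parameter beta and the toy task are assumptions for illustration.

      import numpy as np

      def soft_and(a, b, beta=10.0):
          """Differentiable stand-in for fuzzy AND (min) via a soft-min."""
          return -np.log(np.exp(-beta * a) + np.exp(-beta * b)) / beta

      # gradient-train a weighted soft-AND unit to reproduce min(x1, x2)
      rng = np.random.default_rng(0)
      X = rng.uniform(0.0, 1.0, size=(200, 2))
      t = np.minimum(X[:, 0], X[:, 1])
      w, lr, eps = np.array([0.5, 0.5]), 0.05, 1e-4
      for _ in range(300):
          y = soft_and(w[0] * X[:, 0], w[1] * X[:, 1])
          for i in range(2):               # numerical gradient keeps it short
              w2 = w.copy(); w2[i] += eps
              y2 = soft_and(w2[0] * X[:, 0], w2[1] * X[:, 1])
              w[i] -= lr * np.mean((y - t) * (y2 - y)) / eps
      print(np.round(w, 2))                # both weights move toward 1.0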

  • Generalization Ability of Feedforward Neural Network Trained by Fahlman and Lebiere's Learning Algorithm

    Masanori HAMAMOTO  Joarder KAMRUZZAMAN  Yukio KUMAGAI  Hiromitsu HIKITA  

     
    LETTER-Neural Networks
    Vol: E75-A No:11, Page(s): 1597-1601

    Fahlman and Lebiere's (FL) learning algorithm begins with a two-layer network and, in the course of training, can construct various network architectures. We applied the FL algorithm to the same three-layer network architecture as a backpropagation (BP) network and compared their generalization properties. Simulation results show that the FL algorithm yields excellent saturation of hidden units, which cannot be achieved by the BP algorithm, and furthermore has more desirable generalization ability than the BP algorithm.
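
    The FL algorithm referred to throughout these entries is Cascade-Correlation: training starts with direct input-to-output connections, and hidden units are added one at a time, each trained to correlate with the current residual error and then frozen. A minimal single-output sketch, assuming a linear output unit and omitting FL details such as quickprop and candidate pools:

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def cascade_correlation(X, y, n_hidden=3, epochs=2000, lr=0.1, seed=0):
          rng = np.random.default_rng(seed)
          H = np.hstack([X, np.ones((len(X), 1))])       # inputs + bias column
          for _ in range(n_hidden):
              # 1) train the linear output weights on the current features
              w = np.zeros(H.shape[1])
              for _ in range(epochs):
                  w += lr * H.T @ (y - H @ w) / len(y)
              resid = y - H @ w
              # 2) train one candidate unit to maximize the covariance
              #    between its activation and the residual error
              v = rng.normal(scale=0.1, size=H.shape[1])
              for _ in range(epochs):
                  a = sigmoid(H @ v)
                  c, r = a - a.mean(), resid - resid.mean()
                  s = 1.0 if c @ r >= 0 else -1.0        # sign of the covariance
                  v += lr * H.T @ (s * r * a * (1.0 - a)) / len(y)
              # 3) freeze the unit; its output becomes a new input feature
              H = np.hstack([H, sigmoid(H @ v)[:, None]])
          # final output training over inputs and all frozen hidden units
          w = np.zeros(H.shape[1])
          for _ in range(epochs):
              w += lr * H.T @ (y - H @ w) / len(y)
          return w, H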

  • Principal Component Analysis by Homogeneous Neural Networks, Part I: The Weighted Subspace Criterion

    Erkki OJA  Hidemitsu OGAWA  Jaroonsakdi WANGVIWATTANA  

     
    PAPER-Bio-Cybernetics
    Vol: E75-D No:3, Page(s): 366-375

    Principal Component Analysis (PCA) is a useful technique in feature extraction and data compression. It can be formulated as the constrained maximization of a statistical objective function, whose solution is given by unit eigenvectors of the data covariance matrix. In a practical application like image compression, the problem can be solved numerically by a corresponding gradient-ascent maximization algorithm. Such on-line algorithms can be good alternatives due to their parallelism and adaptivity to input data, and they can be implemented in a local and homogeneous way in learning neural networks. One example is the Subspace Network, a regular layer of parallel artificial neurons with a learning rule that is completely homogeneous with respect to the neurons. However, due to this complete homogeneity, the learning rule does not converge to the unique basis given by the dominant eigenvectors; any basis of this eigenvector subspace is possible. In many applications like data compression, the subspace alone is not sufficient and the actual eigenvectors, or PCA coefficient vectors, are needed. A new criterion, called the Weighted Subspace Criterion, is proposed; it makes a small symmetry-breaking change to the Subspace Criterion so that only the true eigenvectors are solutions. Making the corresponding change to the learning rule of the Subspace Network gives a modified learning rule that can still be implemented on a homogeneous network architecture. In learning, the weight vectors then tend to the true eigenvectors.
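
    A minimal sketch of the symmetry-breaking idea, assuming the weighted rule takes the common form in which neuron i's feedback term is scaled by its own positive coefficient theta_i (distinct values break the rotational symmetry; equal values recover the plain Subspace Network rule):

      import numpy as np

      def weighted_subspace_pca(X, p, theta, lr=0.01, epochs=100, seed=0):
          """Columns of W tend, up to sign and a theta-dependent scaling,
          to the leading eigenvectors of the data covariance matrix."""
          rng = np.random.default_rng(seed)
          W = rng.normal(scale=0.1, size=(X.shape[1], p))
          Th = np.diag(theta)
          for _ in range(epochs):
              for x in X:
                  y = W.T @ x                                  # neuron outputs
                  # Hebbian term minus theta-weighted feedback term:
                  # delta w_i = lr * y_i * (x - theta_i * W y)
                  W += lr * (np.outer(x, y) - W @ np.outer(y, y) @ Th)
          return W

      # usage: recover the two leading principal directions of toy data
      rng = np.random.default_rng(1)
      X = rng.normal(size=(500, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.2])
      W = weighted_subspace_pca(X, p=2, theta=[1.0, 0.5])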

  • Principal Component Analysis by Homogeneous Neural Networks, Part II: Analysis and Extensions of the Learning Algorithms

    Erkki OJA  Hidemitsu OGAWA  Jaroonsakdi WANGVIWATTANA  

     
    PAPER-Bio-Cybernetics
    Vol: E75-D No:3, Page(s): 376-382

    Artificial neurons and neural networks have been shown to perform Principal Component Analysis (PCA) when gradient-ascent learning rules are used that are related to the constrained maximization of statistical objective functions. Due to their parallelism and adaptivity to input data, such algorithms and their implementations in neural networks are potentially useful in feature extraction and data compression. In the companion paper (9), two such learning rules were derived from two criteria, the Subspace Criterion and the Weighted Subspace Criterion. It was shown that the only solutions to the latter problem are the dominant eigenvectors of the data covariance matrix, which are the basis vectors of PCA, and a simulation suggested that the corresponding learning algorithm converges to these eigenvectors. A homogeneous neural network implementation was proposed for the algorithm. The learning algorithm is analyzed here in detail: it is shown that it can be approximated by a continuous-time differential equation obtained by averaging, and that the asymptotically stable limits of this differential equation are the eigenvectors. The neural network learning algorithm is further extended to a case in which each neuron has a sigmoidal nonlinear feedback activity function. Then no parameters specific to each neuron are needed, and the learning rule is fully homogeneous.
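
    As a worked illustration of the averaging step, if the stochastic update has the form sketched under the Part I entry above, taking expectations over the input x with covariance C = E[xx^T] gives a continuous-time equation of the form (a reconstruction under that assumption, not necessarily the paper's exact notation):

      \frac{dW}{dt} = C W - W\,(W^{\top} C W)\,\Theta,
      \qquad \Theta = \operatorname{diag}(\theta_1, \dots, \theta_p).

    Substituting a candidate solution whose i-th column is c_i e_i, with e_i a unit eigenvector of C (so C e_i = \lambda_i e_i), reduces the fixed-point condition to \lambda_i c_i = \theta_i \lambda_i c_i^3, so the equilibria are the eigenvectors scaled to norm c_i = \theta_i^{-1/2}.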
